31 research outputs found

    Providing Information Feedback to Bidders in Online Multi-unit Combinatorial Auctions

    Bidders in online multi-unit combinatorial auctions face the acute problem of estimating the valuations of an immense number of packages. Can the seller guide the bidders to avoid placing bids that are too high or too low? In the single-unit case, fast methods are now available for incrementally computing, for each package at each time instant, the recommended lower bound (Deadness Level) and upper bound (Winning Level) on the next bid. But when there are multiple units of items, it becomes difficult to compute the Deadness Level of a package accurately. An upper bound on this quantity can, however, be derived, and a bid that lies between this bound and the Winning Level is “safe”, in the sense that it is not wasted and has the potential to become a winning bid. What is now needed is an incremental procedure for speeding up the computation of this bound.
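
    The guidance described above amounts to a simple interval test per package. Below is a minimal sketch of that test, assuming hypothetical names (`dl_upper` for the computed upper bound on the Deadness Level, `winning_level` for the Winning Level); it is illustrative only and not the paper's incremental procedure.

```python
# Illustrative sketch, not the paper's incremental procedure: testing whether a
# candidate bid on a package is "safe" given the two guide quantities above.
# `dl_upper` (upper bound on the Deadness Level) and `winning_level` are
# hypothetical parameter names.
def is_safe_bid(bid: float, dl_upper: float, winning_level: float) -> bool:
    """A bid is 'safe' if it clears the Deadness-Level upper bound (so it is
    not wasted) and does not exceed the Winning Level (so it can still win)."""
    return dl_upper <= bid <= winning_level

print(is_safe_bid(bid=120.0, dl_upper=100.0, winning_level=150.0))  # True: worth placing
print(is_safe_bid(bid=90.0,  dl_upper=100.0, winning_level=150.0))  # False: would be a dead bid
```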

    Time Delay in Rectification of Faults in Software Projects

    Software reliability models, such as the Basic (i.e., Exponential) Model and the Logarithmic Poisson Model, make the idealizing assumption that when a failure occurs during a program run, the corresponding fault in the program code is corrected without any loss of time. In practice, it takes time to rectify a fault. This is perhaps one reason why, when the cumulative number of faults is computed using such a model and plotted against time, the fit with observed failure data is often not very close. In this paper, we show how the average delay to rectify a fault can be incorporated as a parameter in the Basic Model, changing the defining differential equation into a differential-difference equation. When this is solved, the time delay for which the fit with observed data is closest can be found. The delay need not be constant during the course of testing, but can change slowly with time, giving an even closer fit. The pattern of variation of the delay with time during testing can be related both to the learning acquired by the testing team and to the difficulty level of the faults that remain to be discovered in the package. This is likely to prove useful to managers of software projects in the deployment of staff.
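
    For orientation, the standard Basic (exponential) model and one plausible way a rectification delay could enter it are sketched below; the delayed form is an illustrative assumption, not necessarily the paper's exact differential-difference equation. Here ν₀ is the total fault content, φ the per-fault detection rate, τ the average rectification delay, and μ(t) the expected cumulative number of corrected faults.

```latex
% Basic (exponential) model: faults are assumed to be corrected instantly.
\frac{d\mu}{dt} = \phi\,\bigl(\nu_0 - \mu(t)\bigr), \qquad \mu(0)=0
  \;\Longrightarrow\; \mu(t) = \nu_0\bigl(1 - e^{-\phi t}\bigr)

% Illustrative delayed variant (assumed form): the correction term lags the
% failure process by the average rectification delay \tau, turning the ODE
% into a differential-difference equation.
\frac{d\mu}{dt} = \phi\,\bigl(\nu_0 - \mu(t-\tau)\bigr), \qquad \mu(t)=0 \ \text{for}\ t \le 0
```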

    The Development, Testing, and Release of Software Systems in the Internet Age: A Generalized Analytical Model

    A major issue in the production of software by a software company is the estimation of the total expenditure likely to be incurred in developing, testing, and debugging a new package or product. If the development cost and development schedule are assumed known, then the major cost factors are the testing cost, the risk cost for the errors that remain in the software at the end of testing, and the opportunity cost. The control parameters are the times at which testing begins and ends, and the time at which the package is released in the market (or the product is supplied to the customer). By adjusting the values of these parameters, the total expenditure can be minimized. Internet technology makes it possible to provide software patches, and this encourages early release. Here we examine the major cost factors and derive a canonical expression for the minimum total expenditure. We show analytically that when the minimum is achieved, (1) testing will continue beyond the time of release and (2) the number of software errors in the package when testing ends will be a constant (i.e., the package will have a guaranteed reliability). We apply the model to a few special scenarios of interest and derive their properties. It is shown that incorporating, as a separate item, the cost incurred to fix the errors discovered during testing has only a marginal effect on the canonical expression derived earlier.
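
    To make the shape of the optimization concrete, here is a toy numerical sketch: hypothetical cost coefficients, an exponential remaining-error curve taken from the Basic reliability model, and a grid search over the release time and the end of testing. It is not the paper's canonical expression, only an illustration of the trade-off among the three cost factors.

```python
# Toy sketch only: hypothetical cost coefficients and an exponential
# remaining-error curve (Basic reliability model). Minimizes a total-expenditure
# surrogate over the release time and the time at which testing ends.
import numpy as np

N0, PHI = 500.0, 0.05                     # assumed initial errors and detection rate
C_TEST, C_RISK, C_OPP = 2.0, 40.0, 5.0    # hypothetical unit costs
T_START = 0.0                             # testing begins at time 0

def remaining_errors(t_end):
    """Expected errors still in the package when testing stops (Basic model)."""
    return N0 * np.exp(-PHI * (t_end - T_START))

def total_expenditure(t_release, t_end):
    testing_cost = C_TEST * (t_end - T_START)      # effort spent on testing
    risk_cost = C_RISK * remaining_errors(t_end)   # residual-error risk
    opportunity_cost = C_OPP * t_release           # revenue lost by releasing late
    return testing_cost + risk_cost + opportunity_cost

grid = np.linspace(1.0, 200.0, 200)
best = min(((total_expenditure(tr, te), tr, te) for tr in grid for te in grid),
           key=lambda x: x[0])
print(f"min cost {best[0]:.1f} at release t = {best[1]:.1f}, end of testing t = {best[2]:.1f}")
# In this toy setting the optimum releases early and keeps testing long after
# release, which mirrors result (1) of the abstract.
```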

    Graph search methods for non-order-preserving evaluation functions: applications to job sequencing problems

    Graph search with A∗ is frequently faster than tree search. But A∗ graph search operates correctly only when the evaluation function is order-preserving. In the non-order-preserving case, no paths can be discarded and the entire explicit graph must be stored in memory. Such situations arise in one-machine minimum penalty job sequencing problems when setup times are sequence dependent. GREC, the unlimited memory version of a memory-constrained search algorithm of the authors called MREC, has a clear advantage over A∗ in that it is able to find optimal solutions to such problems. At the same time, it is as efficient as A∗ in solving graph search problems with order-preserving evaluation functions. Experimental results indicate that in the non-order-preserving case, GREC is faster than both best-first and depth-first tree search, and can solve problem instances of larger size than best-first tree search.
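
    The sketch below illustrates the kind of problem the abstract refers to: a tiny one-machine sequencing instance with sequence-dependent setup times, solved by a best-first search that keeps every generated node (the whole explicit graph) in memory. It is an illustrative uniform-cost search, not the authors' MREC/GREC algorithm, and the instance data are hypothetical.

```python
# Illustrative sketch, not the authors' MREC/GREC: best-first (uniform-cost)
# search for a tiny one-machine sequencing instance with sequence-dependent
# setup times and weighted-tardiness penalties. Because the penalty of the
# remaining jobs depends on the elapsed time of the partial sequence, nodes
# cannot safely be merged or discarded, so every generated node stays in memory.
import heapq
from itertools import count

JOBS = ["A", "B", "C", "D"]                       # hypothetical instance
PROC = {"A": 4, "B": 3, "C": 5, "D": 2}           # processing times
DUE = {"A": 6, "B": 9, "C": 12, "D": 7}           # due dates
WEIGHT = {"A": 2, "B": 1, "C": 3, "D": 2}         # tardiness weights

def setup(prev, nxt):
    """Sequence-dependent setup time (toy rule)."""
    return 0 if prev is None else 1 + abs(ord(prev) - ord(nxt)) % 3

def best_sequence():
    tie = count()                                  # heap tie-breaker
    # node = (penalty so far, tie, scheduled jobs, last job, elapsed time, sequence)
    frontier = [(0, next(tie), frozenset(), None, 0, [])]
    while frontier:
        g, _, done, last, t, seq = heapq.heappop(frontier)
        if len(done) == len(JOBS):
            return g, seq                          # first complete node popped is optimal
        for j in JOBS:
            if j in done:
                continue
            t2 = t + setup(last, j) + PROC[j]
            g2 = g + WEIGHT[j] * max(0, t2 - DUE[j])
            # No pruning or merging of equivalent-looking states: the explicit
            # search graph is kept whole, as the non-order-preserving case requires.
            heapq.heappush(frontier, (g2, next(tie), done | {j}, j, t2, seq + [j]))

print(best_sequence())   # (minimum weighted tardiness, an optimal sequence)
```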

    SATISFIABILITY METHODS FOR COLOURING GRAPHS

    The graph colouring problem can be solved using methods based on Satisfiability (SAT). An instance of SAT is defined by a Boolean expression written using Boolean variables and the logical connectives AND, OR and NOT. It has to be determined whether there is an assignment of TRUE and FALSE values to the variables that makes the entire expression true. A SAT problem is syntactically and semantically quite simple. Many Constraint Satisfaction Problems (CSPs) in AI and OR can be formulated in SAT. These make use of two kinds of search algorithms: deterministic and randomized. It has been found that deterministic methods are frequently very slow in execution when run on hard CSP instances. A deterministic method always outputs a solution in the end, but it can take an enormous amount of time to do so. This has led to the development of randomized search algorithms like GSAT, which are typically based on local (i.e., neighbourhood) search. Such methods have been applied very successfully to find good solutions to hard decision problems.
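
    As a concrete illustration of the SAT formulation and of randomized local search, the sketch below encodes 3-colouring of a small graph in CNF and runs a GSAT-style flip procedure; the graph, the variable naming, and the restart/flip limits are assumptions for illustration only.

```python
# Illustrative sketch: 3-colouring of a small graph encoded as SAT and solved
# with a GSAT-style randomized local search. One Boolean variable x[v, c] per
# (vertex, colour) pair; the instance and the search limits are hypothetical.
import random

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small, 3-colourable graph
VERTICES = sorted({u for e in EDGES for u in e})
K = 3                                              # number of colours
VARS = [(v, c) for v in VERTICES for c in range(K)]

clauses = []                                       # a clause is a list of (var, polarity)
for v in VERTICES:
    clauses.append([((v, c), True) for c in range(K)])              # at least one colour
    for c1 in range(K):
        for c2 in range(c1 + 1, K):
            clauses.append([((v, c1), False), ((v, c2), False)])    # at most one colour
for (u, v) in EDGES:
    for c in range(K):
        clauses.append([((u, c), False), ((v, c), False)])          # neighbours differ

def satisfied(clause, assign):
    return any(assign[var] == polarity for var, polarity in clause)

def num_satisfied(assign):
    return sum(satisfied(cl, assign) for cl in clauses)

def gsat(max_restarts=20, max_flips=200):
    """GSAT-style local search: flip the variable that satisfies the most clauses."""
    for _ in range(max_restarts):
        assign = {var: random.choice([True, False]) for var in VARS}
        for _ in range(max_flips):
            if num_satisfied(assign) == len(clauses):
                return assign
            def score(var):                        # clauses satisfied after flipping var
                assign[var] = not assign[var]
                s = num_satisfied(assign)
                assign[var] = not assign[var]
                return s
            best_var = max(VARS, key=score)
            assign[best_var] = not assign[best_var]
    return None

model = gsat()
if model is None:
    print("no satisfying assignment found (try more restarts)")
else:
    colouring = {v: c for (v, c), val in model.items() if val}
    print("colouring:", colouring)
```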

    Nickel(II) & Copper(II) Complexes of the Schiff Base Derived from Phenylbiguanide & Benzil


    Complexes of the Schiff base derived from phenylbiguanide and diacetyl monoxime with copper(II) and nickel(II)

    The imine-oxime ligand (H2L) derived from phenylbiguanide and diacetyl monoxime forms 1:1 and 1:2 complexes with Cu(II) and only 1:2 complexes with Ni(II). In all these complexes the ligand acts in the tridentate uninegative form and is mostly protonated. Copper(II) forms complexes of the types [Cu(HLH)X]X, where X = Cl- or Br-, and [Cu(HLH)2]X2, where X = Cl-, Br-, I- or OH-, while nickel(II) forms complexes of the types [Ni(HLH)2]X2, where X = Cl-, Br- or I-, and [Ni(HL)2]. The 1:1 complexes are square-planar while the 1:2 complexes are distorted octahedral.